10 research outputs found

    COCO: The Experimental Procedure

    We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.
    Comment: ArXiv e-prints, arXiv:1603.0877
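
    The procedure described here is implemented in COCO's Python experimentation module cocoex. Below is a minimal sketch of such an experiment, assuming the cocoex and scipy packages are installed; the choice of Nelder-Mead (scipy's fmin) as the benchmarked solver and the budget multiplier of 100 are illustrative and not prescribed by the paper.

        import numpy as np
        import cocoex                    # COCO experimentation module
        from scipy.optimize import fmin  # Nelder-Mead, used here only as an example solver

        budget_multiplier = 100          # illustrative: up to 100 * dimension evaluations per problem

        suite = cocoex.Suite("bbob", "", "")  # single-objective noiseless test suite
        observer = cocoex.Observer("bbob", "result_folder: example-experiment")

        for problem in suite:
            problem.observe_with(observer)    # record data for later post-processing
            budget = budget_multiplier * problem.dimension
            # independent restarts until the budget is exhausted or the final target is hit
            while problem.evaluations < budget and not problem.final_target_hit:
                x0 = (problem.initial_solution if problem.evaluations == 0
                      else np.random.uniform(problem.lower_bounds, problem.upper_bounds))
                fmin(problem, x0, maxfun=budget - problem.evaluations, disp=False)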

    Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games

    The term Procedural Content Generation (PCG) refers to the (semi-)automatic generation of game content by algorithmic means, and its methods are becoming increasingly popular in game-oriented research and industry. A special class of these methods, commonly known as search-based PCG, treats the given task as an optimisation problem. Such problems are predominantly tackled by evolutionary algorithms. We demonstrate in this paper that obtaining more information about the defined optimisation problem can substantially improve our understanding of how to approach the generation of content. To do so, we present and discuss three efficient analysis tools, namely diagonal walks, the estimation of high-level properties, and problem similarity measures. We discuss the purpose of each of the considered methods in the context of PCG and provide guidelines for the interpretation of the obtained results. In this way, we aim to provide methods for the comparison of PCG approaches and, eventually, to increase the quality and practicality of generated content in industry.
    Comment: 30 pages, 8 figures, accepted for publication in Applied Soft Computing
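
    As an illustration of the first of these tools, a diagonal walk can be read as sampling the objective function at equally spaced points along a diagonal of the box-constrained search space and inspecting the resulting fitness profile. The following is a minimal sketch under that reading, assuming a user-supplied, box-constrained fitness function; the function, bounds, and number of steps below are hypothetical.

        import numpy as np

        def diagonal_walk(fitness, lower, upper, num_steps=100):
            """Sample `fitness` at equally spaced points along the diagonal
            of the box [lower, upper] and return (points, values)."""
            lower = np.asarray(lower, dtype=float)
            upper = np.asarray(upper, dtype=float)
            ts = np.linspace(0.0, 1.0, num_steps)
            points = np.array([lower + t * (upper - lower) for t in ts])
            values = np.array([fitness(x) for x in points])
            return points, values

        # usage with a toy fitness function (hypothetical stand-in for a PCG objective)
        sphere = lambda x: float(np.sum(x ** 2))
        _, profile = diagonal_walk(sphere, lower=[-5] * 10, upper=[5] * 10)
        print(profile[:5])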

    GECCO 2023 Tutorial on Benchmarking Multiobjective Optimizers 2.0

    Benchmarking is an important part of algorithm design, selection and recommendation, both in single- and multiobjective optimization. Benchmarking multiobjective solvers seems at first sight more complicated than benchmarking single-objective ones, as there exists no natural total order on the objective space. In the past, comparisons of multiobjective solvers have therefore been done either entirely visually (at first) or via quality indicators and the attainment function. Only very recently did we realize that the quality indicator approach transforms a multiobjective problem into a single-objective (set-based) problem, and thus all recent progress from the rapidly evolving single-objective benchmarking field can be transferred to the multiobjective case as well. Moreover, many multiobjective test functions have been proposed in the past, but not much has changed in the last 15 or so years in terms of addressing the disadvantages of those problems (like Pareto sets on constraint boundaries, usage of distance variables, etc.).
    In this tutorial, we will discuss the past and future of benchmarking multiobjective optimizers. In particular, we will discuss the new view on benchmarking multiobjective algorithms by falling back on single-objective comparisons and thus being able to use all methodologies and tools from the single-objective domain, such as empirical distributions of runtimes. We will also discuss the advantages and drawbacks of some widely used multiobjective test suites that we have all become familiar with over the years and explain how we can do better: by going back to the roots of what a multiobjective problem is in practice, namely the simultaneous optimization of multiple objective functions. Finally, we discuss recent advances in the visualization of (multiobjective) problem landscapes and compare the previous and newly proposed benchmark problems in the context of those landscape visualizations.
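
    A minimal sketch of the quality-indicator view for two minimized objectives follows: the hypervolume dominated by the archive of solutions found so far (relative to a reference point) turns a set of solutions into a single scalar, to which single-objective, target-based runtime measurement can then be applied. The archive and reference point below are illustrative and not taken from any particular test suite.

        import numpy as np

        def hypervolume_2d(points, reference):
            """Hypervolume dominated by `points` w.r.t. `reference`
            (both objectives are minimized; larger is better)."""
            ref = np.asarray(reference, dtype=float)
            pts = np.asarray(points, dtype=float)
            pts = pts[np.all(pts < ref, axis=1)]   # discard points outside the reference box
            if len(pts) == 0:
                return 0.0
            pts = pts[np.argsort(pts[:, 0])]       # sort by the first objective
            hv, best_f2 = 0.0, ref[1]
            for f1, f2 in pts:
                if f2 < best_f2:                   # point is non-dominated so far
                    hv += (ref[0] - f1) * (best_f2 - f2)
                    best_f2 = f2
            return hv

        # illustrative archive of objective vectors; the indicator value can now be
        # tracked over time and compared against targets, as in the single-objective case
        archive = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.1), (0.6, 0.7)]
        print(hypervolume_2d(archive, reference=(1.0, 1.0)))   # 0.32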

    GECCO 2022 tutorial on benchmarking multiobjective optimizers 2.0


    Anytime Performance Assessment in Blackbox Optimization Benchmarking

    We present concepts and recipes for the anytime performance assessment when benchmarking optimization algorithms in a blackbox scenario. We consider runtime, oftentimes measured in the number of blackbox evaluations needed to reach a target quality, to be a universally measurable cost for solving a problem. Starting from the graph that depicts the solution quality versus runtime, we argue that runtime is the only performance measure with a generic, meaningful, and quantitative interpretation. Hence, our assessment is solely based on runtime measurements. We discuss proper choices for solution quality indicators in single- and multiobjective optimization, as well as in the presence of noise and constraints. We also discuss the choice of the target values, budget-based targets, and the aggregation of runtimes by using simulated restarts, averages, and empirical cumulative distributions, which generalize convergence graphs of single runs. The presented performance assessment is to a large extent implemented in the comparing continuous optimizers (COCO) platform, freely available at https://github.com/numbbo/coco
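
    To make the aggregation step concrete, the following sketch shows how runtimes to reach a single target can be combined via simulated restarts and summarized as an empirical cumulative distribution. The run data are made up for illustration, and the sketch assumes at least one successful run exists; it is not the COCO implementation itself.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulated_restart_runtime(successes, failures, rng):
            """Runtime of one simulated run with restarts: draw runs uniformly at random
            (with replacement) until a successful one is drawn, summing the evaluations
            spent along the way. Assumes `successes` is non-empty."""
            runs = [(t, True) for t in successes] + [(t, False) for t in failures]
            total = 0
            while True:
                t, hit = runs[rng.integers(len(runs))]
                total += t
                if hit:
                    return total

        def ecdf(runtimes, budgets):
            """Fraction of (simulated) runs that reached the target within each budget."""
            runtimes = np.sort(np.asarray(runtimes, dtype=float))
            return np.searchsorted(runtimes, budgets, side="right") / len(runtimes)

        # illustrative data: evaluations spent by runs that hit / missed the target
        successes = [120, 340, 95]     # runs that reached the target
        failures = [1000, 1000]        # runs that stopped without reaching it
        simulated = [simulated_restart_runtime(successes, failures, rng) for _ in range(1000)]
        for budget in (100, 1000, 10000):
            print(budget, ecdf(simulated, [budget])[0])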
